Scikit-learn ships with a number of practice datasets, along with implementations of many machine learning algorithms.
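
For example, other bundled toy datasets can be loaded the same way (a minimal sketch, not part of the original notebook; load_iris and load_diabetes are just two of the available loaders):

from sklearn import datasets

# Each loader returns a Bunch object with .data, .target and a DESCR string.
iris = datasets.load_iris()
diabetes = datasets.load_diabetes()
print(iris.data.shape, iris.target.shape)  # (150, 4) (150,)
print(diabetes.data.shape)                 # (442, 10)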


In [82]:
from sklearn import datasets, neighbors, linear_model

digits = datasets.load_digits() # Retrieves digits dataset from scikit-learn

In [29]:
print(digits['DESCR'])


Optical Recognition of Handwritten Digits Data Set
===================================================

Notes
-----
Data Set Characteristics:
    :Number of Instances: 5620
    :Number of Attributes: 64
    :Attribute Information: 8x8 image of integer pixels in the range 0..16.
    :Missing Attribute Values: None
    :Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
    :Date: July; 1998

This is a copy of the test set of the UCI ML hand-written digits datasets
http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits

The data set contains images of hand-written digits: 10 classes where
each class refers to a digit.

Preprocessing programs made available by NIST were used to extract
normalized bitmaps of handwritten digits from a preprinted form. From a
total of 43 people, 30 contributed to the training set and different 13
to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of
4x4 and the number of on pixels are counted in each block. This generates
an input matrix of 8x8 where each element is an integer in the range
0..16. This reduces dimensionality and gives invariance to small
distortions.

For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.
T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.
L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,
1994.

References
----------
  - C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
    Applications to Handwritten Digit Recognition, MSc Thesis, Institute of
    Graduate Studies in Science and Engineering, Bogazici University.
  - E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
  - Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.
    Linear dimensionalityreduction using relevance weighted LDA. School of
    Electrical and Electronic Engineering Nanyang Technological University.
    2005.
  - Claudio Gentile. A New Approximate Maximal Margin Classification
    Algorithm. NIPS. 2000.

What does our data look like?

This is how a handwritten '0' is represented: each value counts the 'on' pixels in one 4x4 block of the original 32x32 bitmap, so 0 means no ink in that block and higher values mean more ink. When rendered with matplotlib's gray colormap below, zeros appear dark and higher values appear lighter.

Each value has a '.' suffix because NumPy stores the array as floating-point numbers rather than integers.


In [39]:
digits['images'][0]


Out[39]:
array([[  0.,   0.,   5.,  13.,   9.,   1.,   0.,   0.],
       [  0.,   0.,  13.,  15.,  10.,  15.,   5.,   0.],
       [  0.,   3.,  15.,   2.,   0.,  11.,   8.,   0.],
       [  0.,   4.,  12.,   0.,   0.,   8.,   8.,   0.],
       [  0.,   5.,   8.,   0.,   0.,   9.,   8.,   0.],
       [  0.,   4.,  11.,   0.,   1.,  12.,   7.,   0.],
       [  0.,   2.,  14.,   5.,  10.,  12.,   0.,   0.],
       [  0.,   0.,   6.,  13.,  10.,   0.,   0.,   0.]])
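
As a sanity check (a small sketch, not part of the original run), digits.data is simply the 8x8 images flattened into 64-element rows, so the first row of digits.data matches digits.images[0]:

import numpy as np

# digits.data has shape (n_samples, 64); digits.images has shape (n_samples, 8, 8).
print(digits.data.shape, digits.images.shape)                    # (1797, 64) (1797, 8, 8)
print(np.array_equal(digits.images[0].ravel(), digits.data[0]))  # True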

In [75]:
import matplotlib.pyplot as plt
plt.gray() 
plt.matshow(digits.images[0])
plt.matshow(digits.images[10])
plt.show()


[Output: two grayscale figures rendering digits.images[0] and digits.images[10]]

In [69]:
for i in range(10):
    plt.matshow(digits.images[i])
    
plt.show()
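
A more compact alternative (a sketch, not part of the original notebook) is to draw all ten digits on a single figure with subplots, labelled with their target values:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 5, figsize=(8, 4))
for ax, image, label in zip(axes.ravel(), digits.images[:10], digits.target[:10]):
    ax.imshow(image, cmap='gray_r')  # reversed gray: ink shows dark on a white background
    ax.set_title(label)
    ax.axis('off')
plt.show()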


Extract our input data (X_digits), our target output data (y_digits), and the number of samples we will work with.


In [48]:
X_digits = digits.data
X_digits


Out[48]:
array([[  0.,   0.,   5., ...,   0.,   0.,   0.],
       [  0.,   0.,   0., ...,  10.,   0.,   0.],
       [  0.,   0.,   0., ...,  16.,   9.,   0.],
       ..., 
       [  0.,   0.,   1., ...,   6.,   0.,   0.],
       [  0.,   0.,   2., ...,  12.,   0.,   0.],
       [  0.,   0.,  10., ...,  12.,   1.,   0.]])

In [49]:
y_digits = digits.target
y_digits


Out[49]:
array([0, 1, 2, ..., 8, 9, 8])

In [54]:
n_samples = len(X_digits)
n_samples


Out[54]:
1797
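
Note that n_samples is 1797 rather than the 5620 instances mentioned in the DESCR, because scikit-learn bundles only the UCI test portion, as the description itself notes. With 1797 samples, the 90/10 split used below works out to 1617 training samples and 180 test samples (a quick check using the same expression as the cells that follow):

split = int(.9 * n_samples)      # int(0.9 * 1797) == 1617
print(split, n_samples - split)  # 1617 180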

Extract 90% of our available data as training data for the models.


In [46]:
X_train = X_digits[:int(.9 * n_samples)]
X_train


Out[46]:
array([[  0.,   0.,   5., ...,   0.,   0.,   0.],
       [  0.,   0.,   0., ...,  10.,   0.,   0.],
       [  0.,   0.,   0., ...,  16.,   9.,   0.],
       ..., 
       [  0.,   0.,  12., ...,   0.,   0.,   0.],
       [  0.,   0.,   0., ...,   9.,   0.,   0.],
       [  0.,   0.,   1., ...,  16.,   5.,   0.]])

In [47]:
y_train = y_digits[:int(.9 * n_samples)]
y_train


Out[47]:
array([0, 1, 2, ..., 5, 0, 9])

Extract the remaining 10% of the data as test data, to check how accurately each model predicts digits it did not see during training.


In [76]:
X_test = X_digits[int(.9 * n_samples):]
y_test = y_digits[int(.9 * n_samples):]
X_test


Out[76]:
array([[  0.,   0.,   5., ...,   1.,   0.,   0.],
       [  0.,   0.,   6., ...,   9.,   6.,   2.],
       [  0.,   0.,   0., ...,   6.,   0.,   0.],
       ..., 
       [  0.,   0.,   1., ...,   6.,   0.,   0.],
       [  0.,   0.,   2., ...,  12.,   0.,   0.],
       [  0.,   0.,  10., ...,  12.,   1.,   0.]])
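
Note that this split simply takes the first 90% and the last 10% of the rows in their stored order. scikit-learn also provides train_test_split, which shuffles the samples before splitting; a minimal sketch (in recent releases it lives in sklearn.model_selection):

from sklearn.model_selection import train_test_split

# Shuffled 90/10 split with a fixed seed for reproducibility.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_digits, y_digits, test_size=0.1, random_state=0)
print(X_tr.shape, X_te.shape)  # (1617, 64) (180, 64)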

Use the k-nearest neighbours algorithm to create a classifier that predicts each sample's class by a majority vote among its k nearest training samples.


In [62]:
knn = neighbors.KNeighborsClassifier() # Create a k-nearest neighbours classifier with default parameters (5 neighbours)
knn


Out[62]:
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=5, p=2,
           weights='uniform')

In [63]:
fitting = knn.fit(X_train, y_train) # Train the algorithm on 90% of the samples
fitting


Out[63]:
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=5, p=2,
           weights='uniform')

In [64]:
knn_score = fitting.score(X_test, y_test) # Score the model on the 10% of samples that were held out for testing
print('KNN score: %f' % knn_score)


KNN score: 0.961111
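
To look at individual predictions rather than just the aggregate score (a small sketch, not part of the original run), compare the classifier's output on the first few test images with their true labels:

print(fitting.predict(X_test[:10]))  # predicted digits for the first ten test samples
print(y_test[:10])                   # true digits for the same samples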

Use the Logistic Regression algorithm to fit a second classifier for comparison.


In [78]:
logistic = linear_model.LogisticRegression()
logistic


Out[78]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)

In [77]:
log_regression_fitting = logistic.fit(X_train, y_train)
log_regression_fitting


Out[77]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)

In [71]:
log_regression_score = log_regression_fitting.score(X_test, y_test)
print('LogisticRegression score: %f' % log_regression_score)


LogisticRegression score: 0.938889
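
Logistic regression on raw pixel counts can be sensitive to feature scaling and to the solver's iteration limit; a common variation (a sketch, not part of the original notebook, and not guaranteed to improve this particular score) standardizes the features first in a pipeline:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Standardize each pixel column to zero mean and unit variance, then fit the same model.
scaled_logistic = make_pipeline(StandardScaler(), LogisticRegression())
scaled_logistic.fit(X_train, y_train)
print(scaled_logistic.score(X_test, y_test))  # the resulting score may differ from the raw fit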

We can see from these scores that KNN was the better predictor on this test split, with roughly 96% accuracy versus roughly 94% for logistic regression.


In [74]:
print('KNN score: %f' % knn_score)
print('LGR score: %f' % log_regression_score)


KNN score: 0.961111
LGR score: 0.938889
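
For a per-digit view of where each model goes wrong (a sketch using sklearn.metrics, not part of the original run), a confusion matrix shows which digits are mistaken for which:

from sklearn.metrics import confusion_matrix

print(confusion_matrix(y_test, fitting.predict(X_test)))                 # KNN
print(confusion_matrix(y_test, log_regression_fitting.predict(X_test)))  # logistic regression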
